Results 1 - 20 of 50,394
1.
Opt Express ; 32(7): 11934-11951, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38571030

ABSTRACT

Optical coherence tomography (OCT) can resolve three-dimensional biological tissue structures, but it is inevitably plagued by speckle noise that degrades image quality and obscures biological structure. Recently, unsupervised deep learning methods have become more popular for OCT despeckling, but they still require unpaired noisy-clean images or paired noisy-noisy images. To address this problem, we propose what we believe to be a novel unsupervised deep learning method for OCT despeckling, termed Double-free Net, which eliminates the need for ground truth data and repeated scanning by sub-sampling noisy images and synthesizing noisier images. Compared to existing unsupervised methods, Double-free Net obtains superior denoising performance when trained on datasets comprising retinal and human tissue images without clean images. The efficacy of Double-free Net in denoising holds significant promise for diagnostic applications in retinal pathologies and enhances the accuracy of retinal layer segmentation. Results demonstrate that Double-free Net outperforms state-of-the-art methods and offers strong convenience and adaptability across different OCT images.
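The core trick described above, generating training pairs from a single noisy scan by sub-sampling it and by injecting extra synthetic speckle, can be sketched in a few lines of NumPy. This is only an illustration under assumed details (diagonal 2x2 sub-sampling, unit-mean gamma speckle); the abstract does not specify the paper's exact scheme:

```python
import numpy as np

def subsample_pair(noisy):
    """Split one noisy B-scan into two half-size sub-images by taking
    diagonal pixels of each 2x2 neighbourhood (hypothetical pattern)."""
    return noisy[0::2, 0::2], noisy[1::2, 1::2]

def synthesize_noisier(noisy, rng, strength=0.3):
    """Create a 'noisier' input by multiplying in extra synthetic speckle,
    modelled here as unit-mean gamma noise (a common speckle assumption)."""
    speckle = rng.gamma(shape=1.0 / strength, scale=strength, size=noisy.shape)
    return noisy * speckle
```

The two sub-images can serve as input/target pairs, while the noisier image gives the network a supervision signal that does not require clean ground truth.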


Subject(s)
Algorithms , Optical Coherence Tomography , Humans , Optical Coherence Tomography/methods , Retina/diagnostic imaging , Radionuclide Imaging , Computer-Assisted Image Processing/methods
2.
Sci Data ; 11(1): 330, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38570515

ABSTRACT

Variations in the color and texture of histopathology images are caused by differences in staining conditions and imaging devices between hospitals. These biases decrease the robustness of machine learning models exposed to out-of-domain data. To address this issue, we introduce a comprehensive histopathology image dataset named PathoLogy Images of Scanners and Mobile phones (PLISM). The dataset consists of 46 human tissue types stained under 13 hematoxylin and eosin conditions and captured using 13 imaging devices. Precisely aligned image patches from different domains allow for an accurate evaluation of color and texture properties in each domain. Variation in PLISM was assessed and found to be significantly diverse across domains, particularly between whole-slide images and smartphones. Furthermore, we assessed the improvement in domain shift using a convolutional neural network pre-trained on PLISM. PLISM is a valuable resource that facilitates the precise evaluation of domain shifts in digital pathology and makes significant contributions towards the development of robust machine learning models that can effectively address the challenges of domain shift in histological image analysis.


Subject(s)
Histological Techniques , Computer-Assisted Image Processing , Machine Learning , Neural Networks (Computer) , Staining and Labeling , Humans , Eosine Yellowish-(YS) , Computer-Assisted Image Processing/methods , Histology
3.
Sci Rep ; 14(1): 8253, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589478

ABSTRACT

This work presents a deep learning approach for rapid and accurate muscle water T2 mapping with subject-specific fat T2 calibration using multi-spin-echo acquisitions. The method addresses the computational limitations of conventional bi-component Extended Phase Graph fitting methods (nonlinear least squares and dictionary-based) by leveraging fully connected neural networks for fast processing with minimal computational resources. We validated the approach through in vivo experiments using two different MRI vendors. The results showed strong agreement between our deep learning approach and the reference methods, summarized by Lin's concordance correlation coefficients ranging from 0.89 to 0.97. Further, the deep learning method achieved a significant computational speed-up, processing data 116 and 33 times faster than the nonlinear least squares and dictionary methods, respectively. In conclusion, the proposed approach demonstrated significant time and resource efficiency improvements over conventional methods while maintaining similar accuracy. This methodology makes the processing of water T2 data faster and easier for the user and will facilitate the use of quantitative muscle water T2 mapping in clinical and research studies.
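For context on the dictionary-based baseline the abstract compares against, here is a minimal matcher that estimates T2 by correlating a measured multi-echo decay with a grid of simulated decays. It is a deliberate simplification: it uses plain mono-exponential signals rather than the paper's bi-component Extended Phase Graph model, and all names and parameter ranges are illustrative:

```python
import numpy as np

def fit_t2_dictionary(signal, echo_times, t2_grid):
    """Estimate T2 by matching a multi-echo decay to a dictionary of
    simulated mono-exponential decays (simplified; the paper uses EPG
    signals).  L2-normalisation makes matching amplitude-independent."""
    dictionary = np.exp(-echo_times[None, :] / t2_grid[:, None])
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    best = np.argmax(dictionary @ s)  # highest normalised correlation
    return t2_grid[best]
```

The exhaustive grid search is what makes dictionary fitting slow at scale, which is the bottleneck the neural network in the paper is designed to remove.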


Subject(s)
Algorithms , Deep Learning , Water , Calibration , Magnetic Resonance Imaging/methods , Muscles/diagnostic imaging , Imaging Phantoms , Computer-Assisted Image Processing/methods , Brain
4.
BMC Med Imaging ; 24(1): 83, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589793

ABSTRACT

The research focuses on the segmentation and classification of leukocytes, a crucial task in medical image analysis for diagnosing various diseases. The leukocyte dataset comprises four classes of images: monocytes, lymphocytes, eosinophils, and neutrophils. Leukocyte segmentation is achieved through image processing techniques, including background subtraction, noise removal, and contouring. To isolate leukocytes, background, erythrocyte, and leukocyte masks are created from the blood cell images. Isolated leukocytes are then subjected to data augmentation, including brightness and contrast adjustment, flipping, and random shearing, to improve the generalizability of the CNN model. A deep convolutional neural network (CNN) model is employed on the augmented dataset for effective feature extraction and classification. The deep CNN model consists of four convolutional blocks comprising eleven convolutional layers, eight batch normalization layers, eight Rectified Linear Unit (ReLU) layers, and four dropout layers to capture increasingly complex patterns. For this research, a publicly available Kaggle dataset consisting of 12,444 images of the four leukocyte types was used to conduct the experiments. Results showcase the robustness of the proposed framework, achieving impressive performance metrics with an accuracy of 97.98% and a precision of 97.97%. These outcomes affirm the efficacy of the devised segmentation and classification approach in accurately identifying and categorizing leukocytes. The combination of an advanced CNN architecture and meticulous pre-processing establishes a foundation for future developments in medical image analysis.
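The augmentation step named in the abstract (brightness/contrast adjustment, flipping, random shearing) can be sketched with NumPy alone. The parameter ranges below are assumptions, not values from the paper, and the shear is a nearest-neighbour row-shift simplification:

```python
import numpy as np

def augment(img, rng):
    """Apply brightness/contrast jitter, random flips, and a simple
    horizontal shear to a float image in [0, 1] (illustrative ranges)."""
    out = img.astype(np.float32)
    # brightness / contrast: out = alpha * img + beta, clipped to [0, 1]
    alpha = rng.uniform(0.8, 1.2)
    beta = rng.uniform(-0.1, 0.1)
    out = np.clip(alpha * out + beta, 0.0, 1.0)
    # random horizontal / vertical flips
    if rng.random() < 0.5:
        out = out[:, ::-1]
    if rng.random() < 0.5:
        out = out[::-1, :]
    # horizontal shear: shift each row proportionally to its index
    shear = rng.uniform(-0.1, 0.1)
    sheared = np.empty_like(out)
    for r in range(out.shape[0]):
        sheared[r] = np.roll(out[r], int(round(shear * r)))
    return sheared
```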


Asunto(s)
Aprendizaje Profundo , Humanos , Curaduría de Datos , Leucocitos , Redes Neurales de la Computación , Células Sanguíneas , Procesamiento de Imagen Asistido por Computador/métodos
5.
Med Image Anal ; 94: 103153, 2024 May.
Article in English | MEDLINE | ID: mdl-38569380

ABSTRACT

Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists quantitatively measure the size of wound regions to assist prediction of healing status. The main challenge in this field is the lack of publicly available manual delineations, which are time-consuming and laborious to produce. Recently, methods based on deep learning have shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge was held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention and sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding ground truth segmentation masks. Of the 72 approved requests from 47 countries, 26 teams used this data to develop fully automated systems to predict the true segmentation masks on a test set of 2000 images, whose ground truth segmentation masks were kept private. Predictions from participating teams were scored and ranked according to the average Dice similarity coefficient between the ground truth and prediction masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. The challenge has now entered a live leaderboard stage, where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
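The ranking metric described above, the Dice similarity coefficient between ground truth and predicted masks, has a simple closed form; a minimal NumPy version:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```

A score of 1.0 means perfect overlap; the winning team's 0.7287 is this quantity averaged over the 2000 test images.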


Asunto(s)
Diabetes Mellitus , Pie Diabético , Humanos , Pie Diabético/diagnóstico por imagen , Redes Neurales de la Computación , Benchmarking , Procesamiento de Imagen Asistido por Computador/métodos
6.
Med Image Anal ; 94: 103158, 2024 May.
Article in English | MEDLINE | ID: mdl-38569379

ABSTRACT

Magnetic resonance (MR) images collected in 2D clinical protocols typically have large inter-slice spacing, resulting in high in-plane resolution but reduced through-plane resolution. Super-resolution techniques can enhance the through-plane resolution of MR images to facilitate downstream visualization and computer-aided diagnosis. However, most existing works train the super-resolution network at a fixed scaling factor, which is ill-suited to clinical scenarios where inter-slice spacing varies between MR scans. Inspired by recent progress in implicit neural representation, we propose a Spatial Attention-based Implicit Neural Representation (SA-INR) network for arbitrary reduction of MR inter-slice spacing. The SA-INR represents an MR image as a continuous implicit function of 3D coordinates. In this way, the SA-INR can reconstruct the MR image at arbitrary inter-slice spacing by continuously sampling coordinates in 3D space. In particular, a local-aware spatial attention operation is introduced to model nearby voxels and their affinity more accurately in a larger receptive field. Meanwhile, to improve computational efficiency, a gradient-guided gating mask is proposed to apply the local-aware spatial attention to selected areas only. We evaluate our method on the public HCP-1200 dataset and a clinical knee MR dataset to demonstrate its superiority over existing methods.
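The idea of reconstructing slices at arbitrary spacing by sampling a continuous function of the through-plane coordinate can be illustrated with plain linear interpolation along z. This is a classical stand-in for what an implicit representation such as SA-INR learns, not the paper's method:

```python
import numpy as np

def resample_slices(volume, spacing_in, spacing_out):
    """Resample a (z, y, x) float volume to a new inter-slice spacing by
    linearly interpolating along z — treating the volume as a continuous
    function of the z coordinate, as an INR would, but with a fixed
    linear model instead of a learned one."""
    nz = volume.shape[0]
    z_in = np.arange(nz) * spacing_in
    z_out = np.arange(0.0, z_in[-1] + 1e-9, spacing_out)
    out = np.empty((len(z_out),) + volume.shape[1:], dtype=volume.dtype)
    for i, z in enumerate(z_out):
        j = min(int(z // spacing_in), nz - 2)       # left neighbouring slice
        t = (z - z_in[j]) / spacing_in              # fractional position
        out[i] = (1.0 - t) * volume[j] + t * volume[j + 1]
    return out
```

Because `spacing_out` is a free parameter, the same routine handles any target spacing, which is exactly the flexibility a fixed-scale super-resolution network lacks.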


Asunto(s)
Diagnóstico por Computador , Imagen por Resonancia Magnética , Humanos , Imagen por Resonancia Magnética/métodos , Redes Neurales de la Computación , Articulación de la Rodilla , Fantasmas de Imagen , Procesamiento de Imagen Asistido por Computador/métodos
7.
Med Image Anal ; 94: 103149, 2024 May.
Article in English | MEDLINE | ID: mdl-38574542

ABSTRACT

Variation in histologic staining between medical centers is one of the most profound challenges in computer-aided diagnosis. The appearance disparity of pathological whole-slide images causes algorithms to become less reliable, which in turn impedes the widespread applicability of downstream tasks such as cancer diagnosis. Furthermore, different stainings introduce training biases that, under domain shift, degrade test performance. We therefore propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used multi-domain-capable methods. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. We then test our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index and the ability to reduce domain shift using the Fréchet inception distance. We show that our method is multi-domain capable, provides very high image quality among the compared methods, and most reliably fools the domain classifier while keeping tumor classifier performance high. By reducing the domain influence, biases in the data can be removed on the one hand, and the origin of the whole-slide image can be disguised on the other, thus enhancing patient data privacy.


Asunto(s)
Colorantes , Neoplasias , Humanos , Colorantes/química , Coloración y Etiquetado , Algoritmos , Diagnóstico por Computador , Procesamiento de Imagen Asistido por Computador/métodos
8.
Med Image Anal ; 94: 103157, 2024 May.
Article in English | MEDLINE | ID: mdl-38574544

ABSTRACT

Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained using high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality heavily relies on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. This so-called domain gap between the data used to develop a system and the data it encounters after deployment, and its impact on the performance of deep neural networks (DNNs) supporting endoscopic CAD systems, remains largely unexplored. As many such systems, e.g. for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and experience are subject to considerable variation. Therefore, this study aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. For this, we leverage two publicly available data sets (KVASIR-SEG and GIANA) and two in-house data sets. We investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset including 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable decline in performance of 11.6% (±1.5), compared to the reference, within the clinically calibrated boundaries of image degradations. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pre-training effectively mitigates this drop to 7.7% (±2.03). Additionally, these enhancements yield the highest performance on the manually collected test set including images of lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy applications and propose strategies to mitigate performance loss.


Asunto(s)
Diagnóstico por Computador , Redes Neurales de la Computación , Humanos , Diagnóstico por Computador/métodos , Endoscopía Gastrointestinal , Procesamiento de Imagen Asistido por Computador/métodos
9.
Sci Data ; 11(1): 366, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38605079

ABSTRACT

Studies of radiomics features (RFs) have shown limited reproducibility of RFs across acquisition settings. To date, reproducibility studies using CT images have relied mainly on phantoms, owing to the harm of exposing patients to repeated X-ray doses. The CadAIver dataset provided here aims to evaluate how CT scanner parameters affect radiomics features in a cadaveric donor. The dataset comprises 112 unique CT acquisitions of a cadaveric trunk acquired on 3 different CT scanners with varying kV, mA, field-of-view, and reconstruction kernel settings. Technical validation of the CadAIver dataset comprises a comprehensive univariate and multivariate GLM approach to assess the stability of each RF extracted from the lumbar vertebrae. The complete dataset is publicly available for future research in the RF field and could foster the creation of a collaborative open CT image database to increase the sample size, the range of available scanners, and the available body districts.


Asunto(s)
Vértebras Lumbares , Tomografía Computarizada por Rayos X , Humanos , Cadáver , Procesamiento de Imagen Asistido por Computador/métodos , Vértebras Lumbares/diagnóstico por imagen , 60570 , Reproducibilidad de los Resultados , Tomografía Computarizada por Rayos X/métodos
10.
Comput Biol Med ; 173: 108377, 2024 May.
Article in English | MEDLINE | ID: mdl-38569233

ABSTRACT

Observing cortical vascular structures and functions at high resolution using laser speckle contrast imaging (LSCI) plays a crucial role in understanding cerebral pathologies. Open-skull window techniques are usually applied to reduce scattering by the skull and enhance image quality. However, craniotomy surgeries inevitably induce inflammation, which may obstruct observations in certain scenarios. In contrast, image enhancement algorithms provide popular tools for improving the signal-to-noise ratio (SNR) of LSCI. Current methods are less than satisfactory through the intact skull because the transcranial cortical images are of poor quality. Moreover, existing algorithms do not guarantee the accuracy of dynamic blood flow mappings. In this study, we develop an unsupervised deep learning method, named Dual-Channel in Spatial-Frequency Domain CycleGAN (SF-CycleGAN), to enhance the perceptual quality of cortical blood flow imaging by LSCI. SF-CycleGAN enables convenient, non-invasive, and effective observation of cortical vascular structure and accurate dynamic blood flow mappings without craniotomy, visualizing biodynamics in an undisturbed biological environment. Our experimental results showed that SF-CycleGAN achieved an SNR at least 4.13 dB higher than that of other unsupervised methods, imaged the complete vascular morphology, and enabled functional observation of small cortical vessels. Additionally, the proposed method showed remarkable robustness and could be generalized to various imaging configurations and image modalities, including fluorescence images, without retraining.


Asunto(s)
Hemodinámica , Aumento de la Imagen , Aumento de la Imagen/métodos , Cráneo/diagnóstico por imagen , Flujo Sanguíneo Regional/fisiología , Cabeza , Procesamiento de Imagen Asistido por Computador/métodos
11.
Comput Biol Med ; 173: 108390, 2024 May.
Article in English | MEDLINE | ID: mdl-38569234

ABSTRACT

Radiotherapy is one of the primary treatment methods for tumors, but organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising approach to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor, and they are only validated for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed that integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL) and achieves real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, attention-enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules are proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation can be completed in approximately 70 ms, significantly faster than the time threshold required for real-time tumor tracking. The efficacies of both AEC and URE were validated in ablation studies. The code for this work is available at https://github.com/ZywooSimple/RT-SRTS.


Asunto(s)
Imagenología Tridimensional , Neoplasias , Humanos , Imagenología Tridimensional/métodos , Rayos X , Radiografía , Neoplasias/diagnóstico por imagen , Respiración , Procesamiento de Imagen Asistido por Computador/métodos
12.
PLoS One ; 19(4): e0299099, 2024.
Article in English | MEDLINE | ID: mdl-38564618

ABSTRACT

Individual muscle segmentation is the process of partitioning medical images into regions representing each muscle. It can be used to isolate spatially structured quantitative muscle characteristics, such as volume, geometry, and the level of fat infiltration. These features are pivotal to measuring the state of muscle functional health and to tracking the body's response to musculoskeletal and neuromusculoskeletal disorders. The gold-standard approach to muscle segmentation requires manual processing of large numbers of images and is associated with significant operator repeatability issues and high time requirements. Deep learning-based techniques have recently been suggested to be capable of automating the process, which would catalyse research into the effects of musculoskeletal disorders on the muscular system. In this study, three convolutional neural networks were explored for their capacity to automatically segment twenty-three lower limb muscles of the hips, thighs, and calves from magnetic resonance images. The three neural networks (UNet, Attention UNet, and a novel Spatial Channel UNet) were trained independently with augmented images to segment six subjects and were able to segment the muscles with an average Relative Volume Error (RVE) between -8.6% and 2.9%, an average Dice Similarity Coefficient (DSC) between 0.70 and 0.84, and an average Hausdorff Distance (HD) between 12.2 and 46.5 mm, with performance dependent on both the subject and the network used. The trained convolutional neural networks and the data used in this study are openly available, either for re-training on other medical images or for application to automatically segment new T1-weighted lower limb magnetic resonance images captured with similar acquisition parameters.
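Of the three metrics reported above (RVE, DSC, HD), the Hausdorff distance is the one least often implemented by hand. A brute-force NumPy version for binary masks, assuming isotropic voxels (the paper's voxel geometry is not stated here), is:

```python
import numpy as np

def hausdorff(mask_a, mask_b, voxel_mm=1.0):
    """Symmetric Hausdorff distance (in mm) between the foreground voxels
    of two binary masks: the largest distance from any point of one set
    to its nearest point in the other.  O(|A|*|B|), fine for small masks."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    d = np.sqrt(((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(-1))
    h_ab = d.min(axis=1).max()   # farthest point of A from B
    h_ba = d.min(axis=0).max()   # farthest point of B from A
    return voxel_mm * max(h_ab, h_ba)
```

Unlike Dice, the Hausdorff distance penalises single outlying voxels heavily, which is why the reported HD range (12.2 to 46.5 mm) can vary so much more than the DSC range.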


Asunto(s)
Aprendizaje Profundo , Humanos , Femenino , Animales , Bovinos , Procesamiento de Imagen Asistido por Computador/métodos , Posmenopausia , Muslo/diagnóstico por imagen , Músculos , Imagen por Resonancia Magnética/métodos
13.
Biomed Eng Online ; 23(1): 39, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38566181

ABSTRACT

BACKGROUND: Congenital heart disease (CHD) is one of the most common birth defects in the world. It is the leading cause of infant mortality, necessitating early diagnosis for timely intervention. Prenatal screening using ultrasound is the primary method for CHD detection. However, its effectiveness is heavily reliant on the expertise of physicians, leading to subjective interpretations and potential underdiagnosis. Therefore, a method for automatic analysis of fetal cardiac ultrasound images is highly desirable to assist an objective and effective CHD diagnosis. METHOD: In this study, we propose a deep learning-based framework for the identification and segmentation of the three vessels (the pulmonary artery, aorta, and superior vena cava) in the ultrasound three vessel view (3VV) of the fetal heart. In the first stage of the framework, the object detection model YOLOv5 is employed to identify the three vessels and localize the region of interest (ROI) within the original full-sized ultrasound images. Subsequently, a modified DeepLabv3 equipped with our novel AMFF (Attentional Multi-scale Feature Fusion) module is applied in the second stage to segment the three vessels within the cropped ROI images. RESULTS: We evaluated our method on a dataset consisting of 511 fetal heart 3VV images. Compared to existing models, our framework exhibits superior performance in segmenting all three vessels, achieving Dice coefficients of 85.55%, 89.12%, and 77.54% for the pulmonary artery (PA), aorta (Ao), and superior vena cava (SVC), respectively. CONCLUSIONS: Our experimental results show that the proposed framework can automatically and accurately detect and segment the three vessels in fetal heart 3VV images. This method has the potential to assist sonographers in enhancing the precision of vessel assessment during fetal heart examinations.


Asunto(s)
Aprendizaje Profundo , Embarazo , Femenino , Humanos , Vena Cava Superior , Ultrasonografía , Ultrasonografía Prenatal/métodos , Corazón Fetal/diagnóstico por imagen , Procesamiento de Imagen Asistido por Computador/métodos
14.
Sci Rep ; 14(1): 7906, 2024 04 04.
Article in English | MEDLINE | ID: mdl-38575710

ABSTRACT

This paper delves into the specialized domain of human action recognition, focusing on the identification of Indian classical dance poses, specifically in Bharatanatyam. Within the dance context, a "Karana" embodies a synchronized and harmonious movement encompassing body, hands, and feet, as defined by the Natyashastra. The essence of a Karana lies in the amalgamation of nritta hasta (hand movements), sthaana (body postures), and chaari (leg movements). The Natyashastra codifies 108 karanas, showcased in the intricate stone carvings adorning the Nataraj temples of Chidambaram, where Lord Shiva's association with these movements is depicted. Automating pose identification in Bharatanatyam is challenging due to the vast array of variations, encompassing hand and body postures, mudras (hand gestures), facial expressions, and head gestures. To simplify this intricate task, this research employs image processing and automation techniques. The proposed methodology comprises four stages: acquisition and pre-processing of images involving skeletonization and data augmentation techniques, feature extraction from images, classification of dance poses using a deep convolutional neural network model (InceptionResNetV2), and visualization of 3D models through mesh creation from point clouds. The use of advanced technologies, such as the MediaPipe library for body key-point detection and deep learning networks, streamlines the identification process. Data augmentation, a pivotal step, expands small datasets, enhancing the model's accuracy. The convolutional neural network model showed its effectiveness in accurately recognizing intricate dance movements, paving the way for streamlined analysis and interpretation. This innovative approach not only simplifies the identification of Bharatanatyam poses but also sets a precedent for enhancing accessibility and efficiency for practitioners and researchers in Indian classical dance.


Asunto(s)
Realidad Aumentada , Humanos , Redes Neurales de la Computación , Procesamiento de Imagen Asistido por Computador/métodos , Cabeza , Gestos
15.
PLoS One ; 19(4): e0300122, 2024.
Article in English | MEDLINE | ID: mdl-38578724

ABSTRACT

We introduce the concept of photophysical image analysis (PIA) and an associated pipeline for unsupervised probabilistic image thresholding for images recorded by electron-multiplying charge-coupled device (EMCCD) cameras. We base our approach on a closed-form analytic expression for the characteristic function (Fourier transform of the probability mass function) of the image counts recorded by an EMCCD camera, which accounts both for stochasticity in the arrival of photons at the imaging camera and for the noise subsequently induced by the camera's detection system. The only assumption in our method is that the background photon arrival is described by a stationary Poisson process (we make no assumption about the photon statistics of the signal). We estimate the background photon statistics parameter, λbg, from an image containing both background and signal pixels by means of a novel truncated fit procedure with an automatically determined image count threshold. Prior to this, the camera noise model parameters are estimated in a calibration step. Utilizing the estimates for the camera parameters and λbg, we then introduce a probabilistic thresholding method where, for the first time, the fraction of misclassified pixels can be determined a priori for a general image in an unsupervised way. We use synthetic images to validate our a priori estimates and to benchmark against the Otsu method, a popular unsupervised non-probabilistic image thresholding method that provides no a priori estimates of error rates. For completeness, we lastly present a simple heuristic general-purpose segmentation method based on the thresholding results, which we apply to segmentation of synthetic images and experimental images of fluorescent beads and lung cell nuclei. Our publicly available software enables fully automated, unsupervised, probabilistic photophysical image analysis.
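The Otsu baseline mentioned in the abstract picks the threshold that maximises the between-class variance of the grey-level histogram; a compact NumPy implementation of that standard method:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: exhaustively evaluate every histogram bin as a
    threshold and return the bin centre maximising the between-class
    variance w0*w1*(mu0-mu1)^2 of the two resulting classes."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(np.float64) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 probability mass
    w1 = 1.0 - w0                           # class-1 probability mass
    cum_mean = np.cumsum(p * centers)
    mu_t = cum_mean[-1]                     # global mean
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (mu_t - cum_mean) / np.where(w1 > 0, w1, 1)
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(sigma_b)]
```

Unlike the probabilistic approach of the paper, nothing here yields an a priori misclassification rate; the threshold is purely a histogram-shape optimum.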


Asunto(s)
Diagnóstico por Imagen , Electrones , Procesamiento de Imagen Asistido por Computador/métodos , Análisis de Fourier
16.
PLoS One ; 19(4): e0301978, 2024.
Article in English | MEDLINE | ID: mdl-38603674

ABSTRACT

Radiomic features are usually used to predict target variables such as the absence or presence of a disease, treatment response, or time to symptom progression. One potential clinical application is in patients with Parkinson's disease. Robust radiomic features for this specific imaging method have not yet been identified, which is necessary for proper feature selection. We therefore assess the robustness of radiomic features in dopamine transporter imaging (DaT). For this study, we made an anthropomorphic head phantom with tissue heterogeneity using a personal 3D printer (polylactide, 82% infill); the bone was subsequently reproduced with plaster. A surgical cotton ball with radiotracer (123I-ioflupane) was inserted. Scans were performed on a two-detector hybrid camera with acquisition parameters corresponding to international guidelines for DaT single photon emission computed tomography (SPECT). SPECT reconstruction was performed on a clinical workstation with iterative algorithms. Open-source LifeX software was used to extract 134 radiomic features. Statistical analysis was performed in RStudio using the intraclass correlation coefficient (ICC) and coefficient of variation (COV). Overall, radiomic features across different reconstruction parameters showed a moderate reproducibility rate (ICC = 0.636, p < 0.01). Assessment of ICC and COV within the CT attenuation correction (CTAC) and non-attenuation correction (NAC) groups and within particular feature classes showed an excellent reproducibility rate (ICC > 0.9, p < 0.01), except for the intensity-based NAC group, where radiomic features showed a good repeatability rate (ICC = 0.893, p < 0.01). Our results indicate that CTAC is the main threat to feature stability. However, many radiomic features were sensitive to the selected reconstruction algorithm irrespective of attenuation correction. Radiomic features extracted from DaT-SPECT showed moderate to excellent reproducibility rates. These results make them suitable for clinical practice and human studies, but care should be taken during feature selection, as some radiomic features are more robust than others.
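The two reliability statistics used in this study, COV and ICC, are straightforward to reproduce. The sketch below implements the coefficient of variation and a one-way random-effects ICC(1,1); the abstract does not state which ICC form was actually used, so this variant is an assumption:

```python
import numpy as np

def cov_percent(x):
    """Coefficient of variation (%) of repeated feature measurements."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for a (targets x measurements)
    matrix: (MSB - MSW) / (MSB + (k-1) * MSW), where MSB/MSW are the
    between- and within-target mean squares."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)            # between
    msw = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC of 1.0 means measurements of each feature are identical across reconstruction settings; the study's overall 0.636 corresponds to only moderate agreement.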


Asunto(s)
Proteínas de Transporte de Dopamina a través de la Membrana Plasmática , Nortropanos , 60570 , Humanos , Reproducibilidad de los Resultados , Procesamiento de Imagen Asistido por Computador/métodos , Tomografía Computarizada de Emisión de Fotón Único , Computadores
17.
Sci Rep ; 14(1): 8504, 2024 04 12.
Article in English | MEDLINE | ID: mdl-38605094

ABSTRACT

This work investigates the clinical feasibility of deep learning-based synthetic CT images for cervical cancer, comparing them to MR for Calculating ATtenuation (MRCAT). A patient cohort with 50 pairs of T2-weighted MR and CT images from cervical cancer patients was split into 40 for the training phase and 10 for the testing phase. As a preprocessing step, we conducted deformable image registration and Nyul intensity normalization of the MR images to maximize the similarity between MR and CT images. The processed images were fed into a deep learning model, a generative adversarial network. To establish clinical feasibility, we assessed the accuracy of the synthetic CT images in image similarity using the structural similarity index (SSIM) and mean absolute error (MAE), and in dosimetric similarity using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. Synthetic CT images generated by deep learning outperformed MRCAT images in image similarity by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved GPRs of 98.71% and 96.39% at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, which were 0.9% and 5.1% higher than those of MRCAT images.
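Two of the image-similarity metrics used above, MAE (in HU) and SSIM, can be sketched in NumPy. The SSIM here is a single-window (global) simplification of the usual locally windowed index, using the standard Wang et al. constants; the paper presumably uses a windowed implementation:

```python
import numpy as np

def mae_hu(ct_true, ct_syn):
    """Mean absolute error between two CT volumes, in Hounsfield units."""
    return np.abs(ct_true.astype(np.float64) - ct_syn).mean()

def ssim_global(a, b, data_range, k1=0.01, k2=0.03):
    """Global (single-window) SSIM: compares luminance, contrast, and
    structure of the two whole images at once — a simplification of the
    locally windowed index."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

The gamma passing rate is omitted here because it requires spatial dose-distribution search and is considerably more involved than these two pixelwise measures.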


Subject(s)
Deep Learning , Uterine Cervical Neoplasms , Female , Humans , Uterine Cervical Neoplasms/diagnostic imaging , Feasibility Studies , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Radiotherapy Planning, Computer-Assisted/methods
18.
PLoS One ; 19(4): e0301132, 2024.
Article in English | MEDLINE | ID: mdl-38626138

ABSTRACT

Magnetic Resonance Imaging (MRI) datasets from epidemiological studies often show a lower prevalence of motion artifacts than what is encountered in clinical practice. These artifacts can be unevenly distributed between subject groups and studies, which introduces a bias that needs to be addressed when augmenting data for machine learning purposes. Since unreconstructed multi-channel k-space data are typically not available for population-based MRI datasets, motion simulations must be performed using signal-magnitude data. There is thus a need to systematically evaluate how realistic such magnitude-based simulations are. We performed magnitude-based motion simulations on a dataset (MR-ART) from 148 subjects for which real motion-corrupted reference data were also available. The similarity of real and simulated motion was assessed using image quality metrics (IQMs), including the Coefficient of Joint Variation (CJV), Signal-to-Noise Ratio (SNR), and Contrast-to-Noise Ratio (CNR). An additional comparison was made by investigating the decrease in the Dice-Sørensen Coefficient (DSC) of automated segmentations with increasing motion severity. Segmentation of the cerebral cortex was performed with six freely available tools: FreeSurfer, BrainSuite, ANTs, SAMSEG, FastSurfer, and SynthSeg+. To better mimic real subject motion, the original motion simulation within an existing data augmentation framework (TorchIO) was modified to allow a non-random motion paradigm and a specified phase-encoding direction. The mean difference in CJV/SNR/CNR between the real motion-corrupted images and our modified simulations (0.004±0.054/-0.7±1.8/-0.09±0.55) was lower than that of the original simulations (0.015±0.061/0.2±2.0/-0.29±0.62). Likewise, the mean difference in DSC relative to the real motion-corrupted images was lower for our modified simulations (0.03±0.06) than for the original simulations (-0.15±0.09). SynthSeg+ showed the highest robustness towards all forms of motion, real and simulated. In conclusion, reasonably realistic synthetic motion artifacts can be induced on a large scale when only magnitude MR images are available, yielding unbiased datasets for training machine learning-based models.
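The core of magnitude-based motion simulation is to Fourier-transform the magnitude image, replace some phase-encoding lines of k-space with lines from a transformed copy, and transform back. A minimal NumPy sketch under simplifying assumptions (random line selection and a single in-plane translation; TorchIO's actual transform and the non-random paradigm evaluated in the study are more structured):

```python
import numpy as np

def simulate_motion(image, shift_px=3, corrupt_frac=0.3, pe_axis=0, seed=0):
    """Corrupt a magnitude image with translation-type motion artifacts.

    A fraction of k-space lines along the phase-encoding axis is replaced
    with lines from a shifted copy of the image, mimicking a subject who
    moved partway through the acquisition.
    """
    rng = np.random.default_rng(seed)
    k_still = np.fft.fftshift(np.fft.fft2(image))
    k_moved = np.fft.fftshift(np.fft.fft2(np.roll(image, shift_px, axis=1)))
    n_lines = image.shape[pe_axis]
    lines = rng.choice(n_lines, size=int(corrupt_frac * n_lines), replace=False)
    k_mix = k_still.copy()
    if pe_axis == 0:
        k_mix[lines, :] = k_moved[lines, :]
    else:
        k_mix[:, lines] = k_moved[:, lines]
    # Back to image space; magnitude discards the inconsistent phase.
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_mix)))
```

With `corrupt_frac=0` the image is returned unchanged (up to FFT round-off), while increasing the fraction and shift produces the characteristic ghosting along the phase-encoding direction.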


Subject(s)
Artifacts , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Motion (Physics) , Brain/diagnostic imaging , Cerebral Cortex , Image Processing, Computer-Assisted/methods
19.
J Hazard Mater ; 470: 134154, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38581871

ABSTRACT

In this work, a multiplexed colorimetric strategy is presented for the simultaneous, fast visualization of dyes using low-cost, easy-to-prepare indicator papers as sorbents. Response surface methodology (RSM) was employed to statistically model and optimize the process variables for dye extraction and colorimetric assays. Multiplexed colorimetry was realized through synchronous color alignments, from different dimensions, of colorimetric cards co-stained with multiple dyes under RSM-optimized conditions, and smartphone-based image analysis was subsequently performed in different modes to double-check the credibility of the colorimetric assays. As proof-of-concept trials, simultaneous visualization of dyes in both beverages and simulated dye effluents was experimentally demonstrated, with results closely matching HPLC measurements or spiked amounts at RSM-predicted staining times as short as 50 s to 3 min, giving LODs as low as 0.97 ± 0.22/0.18 ± 0.08 µg/mL (tartrazine/brilliant blue) for multiplexed colorimetry, much lower than those obtained by single colorimetry. As this is the first proposal of such an RSM-guided multiplexed colorimetric concept, it provides a reference for the engineering of other all-in-one devices that can realize synchronous visualization applications within a limited number of experimental steps.
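A limit of detection like those reported can be estimated from a linear calibration curve using the conventional LOD = 3.3·σ/S rule (σ the residual standard deviation, S the slope). The abstract does not state which formula the authors used, so this NumPy sketch is an assumption, with illustrative names:

```python
import numpy as np

def lod_from_calibration(concentrations, responses):
    """Limit of detection from a linear calibration: LOD = 3.3 * sigma / slope.

    concentrations: known dye amounts (e.g. ug/mL); responses: the measured
    colorimetric signal (e.g. a smartphone RGB channel intensity).
    """
    x = np.asarray(concentrations, dtype=float)
    y = np.asarray(responses, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residual_sd = (y - (slope * x + intercept)).std(ddof=2)  # 2 fitted params
    return 3.3 * residual_sd / slope
```

A perfectly linear calibration gives an LOD of essentially zero; scatter around the fitted line raises it in proportion to the residual noise.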


Subject(s)
Colorimetry , Coloring Agents , Smartphone , Colorimetry/methods , Coloring Agents/chemistry , Coloring Agents/analysis , Food Contamination/analysis , Tartrazine/analysis , Water Pollutants, Chemical/analysis , Water Pollutants, Chemical/chemistry , Image Processing, Computer-Assisted/methods , Benzenesulfonates/chemistry , Beverages/analysis
20.
PLoS One ; 19(4): e0298287, 2024.
Article in English | MEDLINE | ID: mdl-38593135

ABSTRACT

Cryo-electron micrographs have varied characteristics: individual particles differ in size, shape, and distribution density, and the images suffer from severe background noise, high levels of impurities, irregular particle shapes, blurred edges, and colors similar to the background. Picking single particles from multiple types of cryo-electron micrographs with good adaptability is currently a challenge in the field. This paper leverages the MixUp hybrid-augmentation algorithm to enrich image feature information in the preprocessing stage; builds a feature-perception network based on a channel self-attention mechanism in the feed-forward network of the Swin Transformer model, achieving adaptive adjustment of self-attention between different single particles and increasing the network's tolerance to noise; incorporates the PReLU activation function to enhance information exchange between pixel blocks of different single particles; and combines the cross-entropy loss with the softmax function to construct a Swin Transformer-based classification network suited to single-particle detection in cryo-electron micrographs (Swin-cryoEM), achieving mixed detection of multiple types of single particles. The Swin-cryoEM algorithm better addresses the adaptability problem of picking many types of single particles, improves the accuracy and generalization ability of single-particle picking, and provides high-quality data support for single-particle three-dimensional reconstruction. Ablation and comparison experiments were designed to evaluate Swin-cryoEM in detail and comprehensively on multiple datasets.
Average Precision is an important evaluation index of the model: the optimal Average Precision of Swin-cryoEM reached 95.5% in the training stage, and single-particle picking performance was also superior in the prediction stage. The model inherits the advantages of the Swin Transformer detection model and outperforms mainstream models such as Faster R-CNN and YOLOv5 in single-particle detection on cryo-electron micrographs.
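The MixUp augmentation used in the preprocessing stage is a one-liner: each training sample becomes a convex combination of two images and their labels, with the same mixing coefficient drawn from a Beta distribution. A minimal NumPy sketch (the Beta parameter `alpha` is an assumption; the paper's exact setting is not given here):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two samples: lam ~ Beta(alpha, alpha), same lam for data and labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * np.asarray(x1, dtype=float) + (1 - lam) * np.asarray(x2, dtype=float)
    y = lam * np.asarray(y1, dtype=float) + (1 - lam) * np.asarray(y2, dtype=float)
    return x, y
```

Because labels are mixed with the same coefficient as the images, the network is trained on soft targets, which tends to smooth decision boundaries and improve noise tolerance.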


Subject(s)
Algorithms , Electrons , Cryoelectron Microscopy/methods , Image Processing, Computer-Assisted/methods